Convergence Rates for Deterministic and Stochastic Subgradient Methods Without Lipschitz Continuity
Author
Abstract
We extend the classic convergence rate theory for subgradient methods to apply to non-Lipschitz functions. For the deterministic projected subgradient method, we present a global O(1/√T) convergence rate for any convex function which is locally Lipschitz around its minimizers. This approach is based on Shor's classic subgradient analysis and implies generalizations of the standard convergence rates for gradient descent on functions with Lipschitz or Hölder continuous gradients. Further, we show an O(1/√T) convergence rate for the stochastic projected subgradient method on convex functions with at most quadratic growth, which improves to O(1/T) under either strong convexity or a weaker quadratic lower bound condition.
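To make the setting concrete, here is a minimal sketch of a projected subgradient method with iterate averaging, the kind of scheme whose rates the abstract discusses. The l1 objective, the Euclidean-ball constraint, the horizon-tuned step size c/√T, the use of averaging, and all constants are illustrative assumptions rather than details taken from the paper.

```python
# A minimal sketch of a projected subgradient method, using the l1 norm
# (convex, non-smooth) over a Euclidean ball as a stand-in problem. The step
# size c / sqrt(T) and the averaged iterate are standard choices behind
# O(1/sqrt(T)) rates; all problem data here are illustrative assumptions.
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def f(x):
    return np.abs(x).sum()          # convex, non-smooth objective

def subgradient(x):
    return np.sign(x)               # a subgradient of the l1 norm

def projected_subgradient(x0, T=1000, c=1.0):
    x = x0.copy()
    avg = np.zeros_like(x)
    step = c / np.sqrt(T)           # fixed step tuned to the horizon T
    for t in range(T):
        x = project_ball(x - step * subgradient(x))
        avg += x
    return avg / T                  # averaged iterate, the usual O(1/sqrt(T)) certificate

x_bar = projected_subgradient(np.array([0.9, -0.7, 0.4]))
print(f(x_bar))
```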
Similar Resources
Radial Subgradient Descent
We present a subgradient method for minimizing non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm which is conceptually more natural, has notably improved convergence rates, and for which the analysis is surprisingly simple. At ...
متن کامل"Efficient" Subgradient Methods for General Convex Optimization
A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly-feasible point is known. A feasible sequence of iterates is generated, which converges to within user-specified error of optimality. Feasibility is maintained with a linesearch at each iteration, avoiding the need for orthogonal projections onto the feasible region (an ...
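As a rough illustration of the idea sketched above (maintaining a feasible sequence with a linesearch toward a known strictly feasible point instead of an orthogonal projection), the following hypothetical snippet pulls an infeasible trial point back along the segment to a strictly feasible anchor by bisection. The constraint h, the bisection rule, and the step sizes are illustrative assumptions and need not match the paper's actual linesearch.

```python
# Hypothetical feasibility-restoring linesearch: given a strictly feasible
# anchor e (h(e) < 0) and a possibly infeasible trial point y, bisect on the
# segment [e, y] until a point with h(x) <= 0 is found. Illustrative only.
import numpy as np

def linesearch_feasible(e, y, h, tol=1e-10):
    """Return a feasible point on the segment [e, y]; assumes h(e) < 0."""
    if h(y) <= 0:
        return y
    lo, hi = 0.0, 1.0               # convex-combination weight toward y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        z = e + mid * (y - e)
        if h(z) <= 0:
            lo = mid                # weight lo always indexes a feasible point
        else:
            hi = mid
    return e + lo * (y - e)

# toy instance: minimize sum(x) over the unit ball, anchored at the origin
h = lambda x: np.linalg.norm(x) - 1.0
e = np.zeros(3)
x = e.copy()
for t in range(1, 501):
    y = x - (0.5 / np.sqrt(t)) * np.ones(3)   # subgradient step, may be infeasible
    x = linesearch_feasible(e, y, h)
print(x, h(x))
```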
Lipschitz Stability for Stochastic Programs with Complete Recourse
This paper investigates the stability of optimal solution sets to stochastic programs with complete recourse, where the underlying probability measure is understood as a parameter varying in some space of probability measures. Shapiro proved Lipschitz upper semicontinuity of the solution set mapping. Inspired by this result, we introduce a subgradient distance for probability distributions and esta...
Randomized Block Subgradient Methods for Convex Nonsmooth and Stochastic Optimization
Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging ([22, 36]), we present stochastic block dual averaging (SBDA)—a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA re...
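The following is a loose sketch, under simplifying assumptions, of the general recipe described above: combine uniform random block sampling with a dual-averaging update, refreshing only the sampled coordinate block of the running subgradient sum at each step. The unconstrained l1 objective, the uniform sampling with 1/p rescaling, and the √t scaling are assumptions for illustration and are not claimed to reproduce SBDA as defined in the paper.

```python
# Sketch of randomized block sampling combined with dual averaging on an
# unconstrained problem: only a sampled block of the dual (subgradient) sum
# is refreshed per step, rescaled by 1/p to keep the update unbiased.
import numpy as np

rng = np.random.default_rng(0)

def block_dual_averaging(x0, blocks, T=2000, gamma=1.0):
    x, z = x0.copy(), np.zeros_like(x0)
    p = 1.0 / len(blocks)                     # uniform block-sampling probability
    for t in range(1, T + 1):
        g = np.sign(x)                        # subgradient of the l1 objective at x
        b = blocks[rng.integers(len(blocks))] # sample one coordinate block
        z[b] += g[b] / p                      # unbiased block update of the dual sum
        x = x0 - z / (gamma * np.sqrt(t))     # dual-averaging primal step
    return x

blocks = [np.arange(0, 2), np.arange(2, 4)]
print(block_dual_averaging(np.array([1.0, -2.0, 3.0, -0.5]), blocks))
```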
Quantitative stability in stochastic programming
In this paper we study stability of optimal solutions of stochastic programming problems with fixed recourse. An upper bound for the rate of convergence is given in terms of the objective functions of the associated deterministic problems. As an example, it is shown how it can be applied to the derivation of the Law of the Iterated Logarithm for the optimal solutions. It is also shown that in the case o...
Journal: CoRR
Volume: abs/1712.04104
Pages: -
Year of publication: 2017